Results 1 - 20 of 25
1.
Joint 22nd IEEE International Symposium on Computational Intelligence and Informatics and 8th IEEE International Conference on Recent Achievements in Mechatronics, Automation, Computer Science and Robotics, CINTI-MACRo 2022 ; : 233-238, 2022.
Article in English | Scopus | ID: covidwho-2266905

ABSTRACT

The ability to explain the reasons for one's decisions to others is an important aspect of human intelligence. We examine the explainability of the deep learning models that are most frequently used in medical image processing tasks. Explainability of machine learning models in medicine is essential for understanding how a particular ML model works and how it solves the problems it was designed for. The work presented in this paper focuses on the classification of lung CT scans for the detection of COVID-19 patients. We used CNN and DenseNet models for the classification and explored the application of selected visual explainability techniques to provide insight into how the model works when processing the images. © 2022 IEEE.

2.
IEEE Access ; : 1-1, 2023.
Article in English | Scopus | ID: covidwho-2260137

ABSTRACT

Deep learning has been used for several applications, including the analysis of medical images. Some transfer learning works show that performance improves when a model pre-trained on ImageNet is transferred to a new task. Taking this into account, we propose a method that fine-tunes a model pre-trained on ImageNet for Covid-19 detection. After the fine-tuning process, the units that produce a variance equal to zero are removed from the model. Finally, we test the features of the penultimate layer in different classifiers, removing those that are less important according to the f-test. The results produce models with fewer units than the transferred model. We also study the attention of the neural network during classification. Noise and metadata printed on medical images can bias the neural network, which then performs poorly when tested on new data. We study this bias when raw and masked images are used to train deep models with a transfer learning strategy, and we additionally test the performance of both models (raw and masked data) on novel data.
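A minimal sketch of the pruning steps described above, under the assumption that the penultimate-layer activations have already been extracted into a feature matrix: zero-variance units are dropped, then the remaining features are ranked with an F-test. The synthetic data and all names are placeholders, not the authors' code.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, VarianceThreshold, f_classif

# Placeholder for penultimate-layer activations extracted from a fine-tuned
# network: 200 images x 64 units, with a few dead (zero-variance) units.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))
features[:, [3, 17, 42]] = 0.0               # dead units after fine-tuning
labels = rng.integers(0, 2, size=200)        # Covid-19 vs. non-Covid

# Step 1: remove units whose activations have zero variance.
nonzero_var = VarianceThreshold(threshold=0.0)
reduced = nonzero_var.fit_transform(features)
print("units kept after variance filter:", reduced.shape[1])

# Step 2: keep the most discriminative features according to the F-test.
selector = SelectKBest(score_func=f_classif, k=32)
selected = selector.fit_transform(reduced, labels)
print("units kept after F-test:", selected.shape[1])
```

The selected features would then be fed to the downstream classifiers mentioned in the abstract.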

3.
2022 International Conference on Data Analytics for Business and Industry, ICDABI 2022 ; : 6-10, 2022.
Article in English | Scopus | ID: covidwho-2284184

ABSTRACT

Because of the Covid-19 outbreak, which forced the closure of schools and institutions in India after March 2020, online education gained even greater traction in the country. The flexibility of online education allows students to rewatch recorded lectures as many times as they need to fully comprehend a topic. However, there are challenges and ethical concerns associated with using this technology in the classroom, and the future possibilities, benefits, and drawbacks of using AI in the classroom are yet to be explored. Despite the great popularity and efficiency of online education, some studies have demonstrated that it may be detrimental to students. This research investigates students' attitudes toward the use of technology in the classroom. We use the idea of Explainable Machine Learning (XML), in which the outcomes of ML calculations are explicable to people. The 'black box' approach, by contrast, holds that not even the creators of an AI can explain how it arrived at a certain choice. Several machine learning methods were used to forecast the extent to which students will embrace Industry 4.0 features. The most successful method was a Neural Network (NN), which achieved an impressive 93% classification accuracy. By detailing the inner workings of models to give some level of explainability, we can fully grasp the promise of this algorithm. © 2022 IEEE.

4.
2022 IEEE International Conference on E-health Networking, Application and Services, HealthCom 2022 ; : 246-251, 2022.
Article in English | Scopus | ID: covidwho-2213190

ABSTRACT

In the current era of big data, very large amounts of data are generated at a rapid rate from a wide variety of rich data sources. Electronic health (e-health) records are an example of such big data. With technological advancements, healthcare practice has gradually become supported by electronic processes and communication. This enables health informatics, in which computer science meets the healthcare sector to address healthcare and medical problems. Embedded in the big data are valuable information and knowledge that can be discovered by data science, data mining and machine learning techniques. Many of these techniques apply "opaque box" approaches to make accurate predictions. However, these techniques may not be crystal clear to the users. As users are not necessarily able to clearly view the entire knowledge discovery (e.g., prediction) process, they may not easily trust the discovered knowledge (e.g., predictions). Hence, in this paper, we present a system for providing trustworthy explanations for knowledge discovered from e-health records. Specifically, our system provides users with global explanations for the important features among the records. It also provides users with local explanations for a particular record. Evaluation results on real-life e-health records show the practicality of our system in providing trustworthy explanations for the knowledge discovered (e.g., the accurate predictions made). © 2022 IEEE.
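The abstract does not name a specific attribution method; as a hedged illustration of the global-plus-local pattern it describes, the sketch below pairs a global explanation (permutation importance over all records) with a local one (per-feature contributions of a linear model for a single record). The synthetic data and feature names are assumptions, not the authors' system.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for e-health records (the real data is not public here).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Global explanation: which features matter across all records.
global_imp = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for name, imp in sorted(zip(feature_names, global_imp.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")

# Local explanation: per-feature contribution for one particular record
# (coefficient x value is a simple additive attribution for a linear model).
record = X[0]
contributions = clf.coef_[0] * record
for name, c in zip(feature_names, contributions):
    print(f"{name}: {c:+.3f}")
```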

5.
Inf Process Manag ; 60(3): 103276, 2023 May.
Article in English | MEDLINE | ID: covidwho-2179772

ABSTRACT

The COVID-19 pandemic has spurred a large number of experimental and observational studies reporting a clear correlation between the risk of developing severe COVID-19 (or dying from it) and whether the individual is male or female. This paper is an attempt to explain the supposed male vulnerability to COVID-19 using a causal approach. We proceed by identifying a set of confounding and mediating factors, based on a review of the epidemiological literature and an analysis of sex-disaggregated data. Those factors are then taken into consideration to produce explainable and fair prediction and decision models from observational data. The paper outlines how non-causal models can motivate discriminatory policies, such as biased allocation of the limited resources in intensive care units (ICUs). The objective is to anticipate and avoid disparate impact and discrimination by considering causal knowledge and causal-based techniques to complement the collection and analysis of observational big data. The hope is to contribute to more careful use of health-related information access systems for developing fair and robust predictive models.

6.
BMC Med Inform Decis Mak ; 22(1): 340, 2022 12 28.
Article in English | MEDLINE | ID: covidwho-2196239

ABSTRACT

BACKGROUND: This study aimed to explore whether explainable Artificial Intelligence methods can be fruitfully used to improve the medical management of patients suffering from complex diseases, and in particular to predict the death risk of hospitalized patients with SARS-CoV-2 based on admission data. METHODS: This work is based on an observational ambispective study that comprised patients older than 18 years with a positive SARS-CoV-2 diagnosis who were admitted to the hospital Azienda Ospedaliera "SS Antonio e Biagio e Cesare Arrigo", Alessandria, Italy from February 24, 2020 to May 31, 2021, and who completed the disease treatment inside this structure. The patients' medical history and demographic, epidemiologic and clinical data were collected from the electronic medical records system and paper-based medical records, and were entered and managed by the Clinical Study Coordinators using the REDCap electronic data capture tool. The dataset was used to train and evaluate predictive ML models. RESULTS: We trained, analysed and evaluated 19 predictive models (both supervised and unsupervised) on data from 824 patients described by 43 features. We focused on models that provide an explanation that is understandable and directly usable by domain experts, and compared the results against other classical machine learning approaches. Among the former, JRIP showed the best performance in 10-fold cross-validation, and the best average performance in a further validation test using a different patient dataset from the beginning of the third COVID-19 wave. Moreover, JRIP showed performance comparable to approaches that do not provide a clear and/or understandable explanation. CONCLUSIONS: The supervised ML models correctly discerned between low-risk and high-risk patients, even when the medical disease context is complex and the list of features is limited to information available at admission time. Furthermore, the models performed reasonably on a dataset from the third COVID-19 wave that was not used in the training phase. Overall, these results are remarkable: (i) from a medical point of view, the models produce good predictions despite the possible differences associated with different care protocols and the possible influence of other viral variants (i.e. the delta variant); (ii) from an organizational point of view, they could be used to optimize the management of the healthcare pathway at admission time.


Subject(s)
COVID-19, Humans, COVID-19/diagnosis, SARS-CoV-2, COVID-19 Testing, Artificial Intelligence, Machine Learning, Retrospective Studies
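JRIP is Weka's implementation of the RIPPER rule learner and is not directly available in scikit-learn; as a hedged stand-in for the evaluation described in this record, the sketch below runs 10-fold cross-validation with a shallow decision tree, which yields a similarly readable set of if-then conditions. The synthetic data mimics only the dataset's size, not its content.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for the 824-patient, 43-feature admission dataset.
X, y = make_classification(n_samples=824, n_features=43, n_informative=10,
                           random_state=0)

# Interpretable stand-in for the JRIP rule learner.
clf = DecisionTreeClassifier(max_depth=3, random_state=0)

# 10-fold cross-validation, as reported in the abstract.
scores = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
print(f"10-fold AUC: {scores.mean():.3f} +/- {scores.std():.3f}")

# Fit on the full data and print the human-readable decision rules.
clf.fit(X, y)
print(export_text(clf, feature_names=[f"f{i}" for i in range(X.shape[1])]))
```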
7.
24th International Conference on Human-Computer Interaction, HCII 2022 ; 13518 LNCS:441-460, 2022.
Article in English | Scopus | ID: covidwho-2173820

ABSTRACT

This paper presents a user-centered approach to translating techniques and insights from AI explainability research into effective explanations of complex issues in other fields, using the example of COVID-19. We show how the problem of AI explainability and the explainability problem in the COVID-19 pandemic are related: they are two specific instances of a more general explainability problem that occurs when people face opaque, complex systems and processes whose functioning is not readily observable and understandable to them ("black boxes"). Accordingly, we discuss how we applied an interdisciplinary, user-centered approach based on Design Thinking to develop a prototype of a user-centered explanation for a complex issue concerning people's perception of COVID-19 vaccine development. The developed prototype demonstrates how AI explainability techniques can be adapted, integrated with methods from communication science, visualization and HCI, and applied to this context. We also discuss results from a first evaluation in a user study with 88 participants and outline future work. The results indicate that it is possible to effectively apply methods and insights from explainable AI to explainability problems in other fields, and they support the suitability of our conceptual framework for informing such work. In addition, we show how the lessons learned in the process provide new insights for further work on user-centered approaches to explainable AI itself. © 2022, The Author(s).

8.
24th International Conference on Human-Computer Interaction, HCII 2022 ; 1655 CCIS:647-654, 2022.
Article in English | Scopus | ID: covidwho-2173733

ABSTRACT

Algorithms have advanced in status from supporting human decision-making to making decisions themselves. The fundamental issue here is the relationship between Big Data and algorithms, or how algorithms empower data with direction and purpose. In this paper, I provide a conceptual framework for analyzing and improving ethical decision-making in Human-AI interaction. On the one hand, I examine the challenges and limitations facing the field of Machine Ethics and Explainability in its aim to provide and justify ethical decisions. On the other hand, I propose connecting counterfactual explanations with the emotion of regret as requirements for improving ethical decision-making in novel situations and under uncertainty. To test whether this conceptual framework has empirical value, I analyze the COVID-19 epidemic in terms of "what might have been" to answer the following question: could some of the unintended consequences of this health crisis have been avoided if the available data had been used differently before the crisis happened and as it unfolded? © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.

10.
Sensors (Basel) ; 22(24)2022 Dec 18.
Article in English | MEDLINE | ID: covidwho-2163571

ABSTRACT

The novel coronavirus (COVID-19), which emerged as a pandemic, has claimed many lives and affected millions of people across the world since December 2019. Although the disease is largely under control nowadays, it is still affecting people in many countries. The traditional way of diagnosis is time-consuming, less efficient, and has a low detection rate for this disease. Therefore, there is a need for an automatic system that expedites the diagnosis process while retaining performance and accuracy. Artificial intelligence (AI) technologies such as machine learning (ML) and deep learning (DL) potentially provide powerful solutions to this problem. In this study, a state-of-the-art CNN model, the densely connected squeeze convolutional neural network (DCSCNN), has been developed for the classification of X-ray images of COVID-19, pneumonia, normal, and lung opacity patients. Data were collected from different sources. We applied different preprocessing techniques to enhance the quality of the images so that our model could learn accurately and give optimal performance. Moreover, the attention regions and decisions of the AI model were visualized using the Grad-CAM and LIME methods. The DCSCNN combines the strengths of the Dense and Squeeze networks. In our experiments, seven kinds of classification were performed: six binary classifications (COVID vs. normal, COVID vs. lung opacity, lung opacity vs. normal, COVID vs. pneumonia, pneumonia vs. lung opacity, pneumonia vs. normal) and one multiclass classification (COVID vs. pneumonia vs. lung opacity vs. normal). The main contributions of this paper are as follows. First, the development of the DCSCNN model, which is capable of performing binary as well as multiclass classification with excellent accuracy. Second, to ensure trust, transparency, and explainability of the model, we applied two popular Explainable AI (XAI) techniques, i.e., Grad-CAM and LIME. These techniques helped to address the black-box nature of the model while improving its trust, transparency, and explainability. Our proposed DCSCNN model achieved an accuracy of 98.8% for the classification of COVID-19 vs. normal, followed by COVID-19 vs. lung opacity: 98.2%, lung opacity vs. normal: 97.2%, COVID-19 vs. pneumonia: 96.4%, pneumonia vs. lung opacity: 95.8%, pneumonia vs. normal: 97.4%, and lastly, for multiclass classification of all four classes (COVID vs. pneumonia vs. lung opacity vs. normal): 94.7%. The DCSCNN model provides excellent classification performance, consequently helping doctors diagnose diseases quickly and efficiently.


Subject(s)
COVID-19, Humans, COVID-19/diagnostic imaging, Artificial Intelligence, X-Rays, Neural Networks, Computer
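The DCSCNN architecture from this record is not public, so the sketch below shows only the generic Grad-CAM recipe the abstract mentions, applied to a torchvision DenseNet backbone as an assumed stand-in; the input tensor is a random placeholder rather than a real chest X-ray.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Stand-in backbone; the paper's DCSCNN is not reproduced here.
model = models.densenet121(weights=None)
model.eval()

feats, grads = {}, {}

def fwd_hook(_, __, output):
    feats["value"] = output                                  # last conv feature maps
    output.register_hook(lambda g: grads.update(value=g))    # their gradient

model.features.register_forward_hook(fwd_hook)

x = torch.randn(1, 3, 224, 224)            # placeholder chest X-ray tensor
logits = model(x)
logits[0, logits.argmax()].backward()      # gradient of the top predicted class

# Grad-CAM: weight each feature map by its average gradient, ReLU, upsample.
weights = grads["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * feats["value"]).sum(dim=1, keepdim=True)).detach()
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)    # normalize to [0, 1]
print(cam.shape)                           # heat map aligned with the input image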
11.
28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2022 ; : 4684-4694, 2022.
Article in English | Scopus | ID: covidwho-2020405

ABSTRACT

In the fight against the COVID-19 pandemic, vaccines are the most critical resource but are still in short supply around the world. Therefore, efficient vaccine allocation strategies are urgently called for, especially in large-scale metropolises where uneven health risk is manifested across nearby neighborhoods. However, there are several key challenges in solving this problem: (1) the great complexity of the large-scale scenario adds to the difficulty of experts' vaccine allocation decision making; (2) heterogeneous information from all aspects of the metropolis' contact network makes information utilization difficult in decision making; (3) when utilizing the strong decision-making ability of reinforcement learning (RL) to solve the problem, poor explainability limits the credibility of the RL strategies. In this paper, we propose a reinforcement learning enhanced experts method. We deal with the great complexity via a specially designed algorithm that aggregates blocks of the metropolis into communities, and we hierarchically integrate RL among the communities with expert solutions within each community. We design a self-supervised contact network representation algorithm to fuse the heterogeneous information for efficient vaccine allocation decision making. We conduct extensive experiments in three metropolises with real-world data and show that our method outperforms the best baseline, reducing infections by 9.01% and deaths by 12.27%. We further demonstrate the explainability of the RL model, adding to its credibility and also enlightening the experts in turn. © 2022 Owner/Author.
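The paper's block-aggregation algorithm is specially designed and not spelled out in the abstract; purely as an assumed illustration of the general idea, the sketch below groups a toy city-block contact graph into communities with networkx's modularity-based clustering. This is not the authors' method, and the graph is fabricated for demonstration only.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy contact network between city blocks; edge weights stand in for
# contact intensity (all values here are made up for illustration).
G = nx.Graph()
G.add_weighted_edges_from([
    ("block_0", "block_1", 5.0), ("block_1", "block_2", 4.0),
    ("block_2", "block_0", 3.0), ("block_3", "block_4", 6.0),
    ("block_4", "block_5", 2.0), ("block_2", "block_3", 0.5),
])

# Aggregate blocks into communities; allocation could then be decided
# hierarchically (RL across communities, expert rules within each one).
communities = greedy_modularity_communities(G, weight="weight")
for i, community in enumerate(communities):
    print(f"community {i}: {sorted(community)}")
```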

12.
IEEE Transactions on Emerging Topics in Computational Intelligence ; : 10, 2022.
Article in English | Web of Science | ID: covidwho-1978407

ABSTRACT

The upheaval brought by the arrival of the COVID-19 pandemic has continued to present fresh challenges over the past two years. During the pandemic, there has been a need for rapid identification of infected patients and precise delineation of infection areas in computed tomography (CT) images. Although deep supervised learning methods were established quickly, the scarcity of both image-level and pixel-level labels, as well as the lack of explainable transparency, still hinders the applicability of AI. Can we identify infected patients and delineate the infections with extremely minimal supervision? Semi-supervised learning has demonstrated promising performance with limited labelled data and sufficient unlabelled data. Inspired by semi-supervised learning, we propose a model-agnostic calibrated pseudo-labelling strategy and apply it within a consistency regularization framework to generate explainable identification and delineation results. We demonstrate the effectiveness of our model using a combination of limited labelled data and sufficient unlabelled or weakly-labelled data. Extensive experiments have shown that our model can efficiently utilize limited labelled data and provide explainable classification and segmentation results for decision-making in clinical routine.
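The calibration step of the paper's strategy is not described in the abstract; as a hedged sketch of the generic building block it rests on, the snippet below shows confidence-thresholded pseudo-labelling with a consistency loss between a weak and a strong augmentation of the same unlabelled batch. The model, tensors, and threshold are placeholders, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder classifier; the paper's model is not reproduced here.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 3))

# Unlabelled batch: random tensors stand in for a weak and a strong
# augmentation of the same CT slices.
weak_view = torch.randn(8, 1, 32, 32)
strong_view = weak_view + 0.3 * torch.randn_like(weak_view)

with torch.no_grad():
    weak_probs = F.softmax(model(weak_view), dim=1)
    confidence, pseudo_labels = weak_probs.max(dim=1)

# Keep only confident pseudo-labels (threshold is an assumed hyperparameter).
mask = (confidence > 0.95).float()

# Consistency loss: the strongly augmented view should predict the
# pseudo-label obtained from the weakly augmented view.
strong_logits = model(strong_view)
loss = F.cross_entropy(strong_logits, pseudo_labels, reduction="none") * mask
loss = loss.sum() / mask.sum().clamp(min=1.0)
print(float(loss))
```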

13.
IEEE Transactions on Computational Social Systems ; : 14, 2022.
Article in English | Web of Science | ID: covidwho-1895932

ABSTRACT

Fake news is a major threat to democracy (e.g., by influencing public opinion), and its impact cannot be understated, particularly in our socially and digitally connected society. Researchers from different disciplines (e.g., computer science, political science, information science, and linguistics) have studied the dissemination, detection, and mitigation of fake news; however, it remains challenging to detect and prevent its dissemination in practice. In addition, we emphasize the importance of designing artificial intelligence (AI)-powered systems that are capable of providing detailed, yet user-friendly, explanations of the classification/detection of fake news. Hence, in this article, we systematically survey existing state-of-the-art approaches designed to detect and mitigate the dissemination of fake news and, based on this analysis, discuss several key challenges and present a potential future research agenda, especially the incorporation of an explainable AI fake-news credibility system.

14.
2nd IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS) ; : 138-144, 2021.
Article in English | Web of Science | ID: covidwho-1895884

ABSTRACT

Ventilated intensive care patients represent a sizable group in the intensive care unit that requires special attention. Although intensive care units are staffed with more nurses per patient than regular wards, the situation is often precarious, and it has become more acute during the COVID-19 pandemic. Weaning from mechanical ventilation, as well as limited communication abilities, poses substantial stress to patients. The inability to communicate even basic needs may negatively impact the healing process and can lead to delirium and other complications. To support the communication and information of weaning patients as well as to foster patient autonomy, we are developing a smart environment that is tailored to the intensive care context. While the provision and connection of smart objects and applications for this purpose can be time-consuming, self-organization and self-explainability may be helpful tools to reduce the effort. In this paper, we present a framework for self-explaining and semi-automatically interconnected ensembles of smart objects and ambient applications (integrated into smart spaces) used to realize the assistive environment. Based on a description language for these components, ensembles can be dynamically connected and tailored to the needs and abilities of the patients. Our framework has been developed and evaluated iteratively and has been tested successfully in our laboratory.

15.
2021 International Conference on Computational Performance Evaluation, ComPE 2021 ; : 598-603, 2021.
Article in English | Scopus | ID: covidwho-1831736

ABSTRACT

The COVID-19 pandemic has overburdened the governments and healthcare systems of many countries around the world and has brought up the need for a fast and accurate diagnostic method. Artificial intelligence (AI) plays a notable role in different aspects of the pandemic: contact tracing, epidemiology, medical diagnosis and prognosis, and drug development. Deep learning has found application in the diagnosis of COVID-19 from chest X-rays (CXR) using convolutional neural networks. Many architectures have been used, and transfer learning is the most preferred approach. These models have proven to be fast and accurate in COVID-19 diagnosis. However, one key element that has prevented the use of AI in clinical practice is its lack of transparency and explainability. In this paper, we use a ResNet-50 pre-trained model to classify the CXR of COVID-19 patients against pneumonia and normal patients. We then use explainability algorithms to visualize the model features and verify the explainability of the model. © 2021 IEEE.
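A minimal sketch of the transfer learning setup the abstract describes, assuming a three-class CXR problem (COVID-19, pneumonia, normal); the frozen backbone, optimizer, and placeholder batch are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-50 pre-trained on ImageNet and replace its head with a
# three-class classifier (COVID-19, pneumonia, normal).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 3)

# Freeze the backbone and fine-tune only the new classification head;
# whether the authors froze any layers is not stated in the abstract.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc.")

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random placeholder batch of CXR images.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 2, 0])
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```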

16.
Comput Methods Programs Biomed ; 220: 106824, 2022 Jun.
Article in English | MEDLINE | ID: covidwho-1797041

ABSTRACT

BACKGROUND AND OBJECTIVE: Artificial Intelligence has proven to be effective in radiomics. The main problem in using Artificial Intelligence is that researchers and practitioners are not able to know how the predictions are generated. This is currently an open issue because the explainability of results is advantageous for understanding the reasoning behind the model, both for patients and for implementing a feedback mechanism for medical specialists using decision support systems. METHODS: Addressing transparency issues in the Artificial Intelligence field, the technique of formal methods uses mathematical logic reasoning to produce an automatic, quick and reliable diagnosis. In this paper we analyze results obtained by adopting formal methods for the diagnosis of Coronavirus disease: specifically, we want to analyse and understand, in a more medical way, the meaning of some radiomic features in order to connect them with clinical or radiological evidence. RESULTS: In particular, the use of formal methods allows the authors to perform statistical analysis on the feature value distributions, to perform pattern recognition on disease models, to generalize the model of a disease, and to reach high performance in both results and their interpretation. A further step toward explainability can be achieved through the localization and selection of the most important slices in a multi-slice approach. CONCLUSIONS: In conclusion, we confirmed the clinical significance of some first-order features such as Skewness and Kurtosis. On the other hand, we suggest declining the use of the Minimum feature because of its intrinsic connection with the computed tomography examination of the lung.


Subject(s)
Artificial Intelligence, Radiology, Humans, Tomography, X-Ray Computed
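To make the first-order radiomic features named in the conclusions concrete, the sketch below computes Skewness, Kurtosis, and Minimum over the intensities of a randomly generated placeholder region of interest using scipy; the actual pipeline presumably used a radiomics toolkit rather than this direct computation.

```python
import numpy as np
from scipy import stats

# Placeholder region of interest: Hounsfield-unit-like intensities drawn at
# random instead of a segmented lung region from a real CT volume.
rng = np.random.default_rng(0)
roi = rng.normal(loc=-600, scale=150, size=(32, 32, 16)).ravel()

# First-order radiomic features discussed in the abstract.
features = {
    "Skewness": stats.skew(roi),
    "Kurtosis": stats.kurtosis(roi),   # excess kurtosis (0 for a Gaussian)
    "Minimum": roi.min(),              # flagged as CT-dependent in the paper
}
for name, value in features.items():
    print(f"{name}: {value:.3f}")
```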
17.
Viruses ; 14(3)2022 03 17.
Article in English | MEDLINE | ID: covidwho-1753689

ABSTRACT

Coronavirus disease 2019 (COVID-19) has resulted in approximately 5 million deaths around the world, with unprecedented consequences for people's daily routines and the global economy. Despite vast increases in the time and money spent on COVID-19-related research, there is still limited information about the country-level factors that affected COVID-19 transmission and fatality in the EU. This paper focuses on the identification of these risk factors using a machine learning (ML) predictive pipeline and an associated explainability analysis. To achieve this, a hybrid dataset was created from publicly available sources, comprising heterogeneous parameters from the majority of EU countries, e.g., mobility measures, policy responses, vaccinations, and demographic/generic country-level parameters. Data pre-processing and data exploration techniques were initially applied to normalize the available data and decrease the feature dimensionality of the problem. Then, a linear ε-Support Vector Machine (ε-SVM) model was employed for the regression task of predicting the number of deaths for each of the first three pandemic waves (with a mean square error of 0.027 for wave 1 and less than 0.02 for waves 2 and 3). Post hoc explainability analysis was finally applied to uncover the rationale behind the decision-making mechanisms of the ML pipeline and thus enhance our understanding of the contribution of the selected country-level parameters to the prediction of COVID-19 deaths in the EU.


Subject(s)
COVID-19, COVID-19/epidemiology, Europe/epidemiology, Humans, Machine Learning, Risk Factors, Support Vector Machine
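A minimal sketch of the regression stage described in this record, assuming standardized country-level features and scikit-learn's linear SVR; the feature matrix, epsilon value, and train/test split are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for country-level features (mobility, policy responses,
# vaccinations, demographics) and normalized death counts for one wave.
rng = np.random.default_rng(0)
X = rng.normal(size=(27, 12))                 # 27 EU countries, 12 features
y = X @ rng.normal(size=12) * 0.1 + rng.normal(scale=0.05, size=27)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Linear epsilon-SVR, as named in the abstract; epsilon here is a guess.
model = make_pipeline(StandardScaler(), SVR(kernel="linear", epsilon=0.01))
model.fit(X_train, y_train)

print("MSE:", mean_squared_error(y_test, model.predict(X_test)))
```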
18.
6th International Conference on Image Information Processing, ICIIP 2021 ; 2021-November:395-400, 2021.
Article in English | Scopus | ID: covidwho-1741196

ABSTRACT

Covid-19 has been a great disaster for the entire world. It is caused by the novel coronavirus, which is highly contagious. Detection of Covid-19 can be done either through saliva testing or through a CT scan. Given the scale at which Covid-19 can spread, an automated detection method that can be adopted at large scale is required. In this work, we focus on the detection of Covid-19 from CT scan images. Our work evaluates well-known CNN architecture-based models in different experimental settings: fine-tuning, removal of pre-trained layers, and data augmentation. For evaluation, we use a dataset of Covid-19 CT scan images. We analyze the performance of VGG-16, InceptionNet, and ResNet. After rigorous experiments, the InceptionNet model performs best with 0.99 AUC, outperforming prior work (which reported 0.98 AUC), with training and testing accuracies of 99.94% and 96.43%, respectively. Furthermore, we also perform explainability experiments on both Covid and non-Covid CT scan images. © 2021 IEEE.

19.
Data & Policy ; 4, 2022.
Article in English | ProQuest Central | ID: covidwho-1699689

ABSTRACT

A number of governmental and nongovernmental organizations have made significant efforts to encourage the development of artificial intelligence in line with a series of aspirational concepts such as transparency, interpretability, explainability, and accountability. The difficulty at present, however, is that these concepts exist at a fairly abstract level, whereas in order for them to have the tangible effects desired they need to become more concrete and specific. This article undertakes precisely this process of concretisation, mapping how the different concepts interrelate and what each of them requires in order to move from being high-level aspirations to detailed and enforceable requirements. We argue that the key concept in this process is accountability, since unless an entity can be held accountable for compliance with the other concepts, and indeed more generally, those concepts cannot do the work required of them. There is a variety of taxonomies of accountability in the literature. However, at the core of each account appears to be a sense of "answerability": a need to explain or to give an account. It is this ability to call an entity to account which provides the impetus for each of the other concepts and helps us to understand what they must each require.

20.
33rd IEEE International Conference on Tools with Artificial Intelligence, ICTAI 2021 ; 2021-November:841-845, 2021.
Article in English | Scopus | ID: covidwho-1685095

ABSTRACT

Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) algorithms have been widely discussed by the Explainable AI (XAI) community, but their application to wider domains is rare, potentially due to the lack of easy-to-use tools built around these methods. In this paper, we present ExMed, a tool that enables XAI data analytics for domain experts without requiring explicit programming skills. It supports data analytics with multiple feature attribution algorithms for explaining machine learning classifications and regressions. We illustrate its domain of application on two real-world medical case studies, the first analysing COVID-19 control measure effectiveness and the second estimating lung cancer patient life expectancy from the artificial Simulacrum health dataset. We conclude that ExMed provides researchers and domain experts with a tool that combines flexibility and transferability across medical sub-domains and reveals deep insights from data. © 2021 IEEE.
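ExMed itself is not publicly documented here; as a hedged illustration of the kind of feature attribution it wraps, the sketch below runs LIME's tabular explainer on a scikit-learn classifier trained on synthetic data. The dataset, model, and feature names are all assumptions.

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic tabular stand-in for a medical dataset.
X, y = make_classification(n_samples=300, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME explains a single prediction by fitting a local surrogate model.
explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["negative", "positive"], mode="classification")
explanation = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```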
